EmbraceNet for Activity: A Deep Multimodal Fusion Architecture for Activity Recognition
Human activity recognition using multiple sensors has been a challenging but
promising task in recent decades. In this paper, we propose a deep multimodal
fusion model for activity recognition based on the recently proposed feature
fusion architecture named EmbraceNet. Our model processes each sensor data
independently, combines the features with the EmbraceNet architecture, and
post-processes the fused feature to predict the activity. In addition, we
propose additional processes to boost the performance of our model. We submit
the results obtained from our proposed model to the SHL Recognition Challenge
with the team name "Yonsei-MCML."

Comment: Accepted in HASCA at ACM UbiComp/ISWC 2019; won 2nd place in the
SHL Recognition Challenge 2019.
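The fusion step described above can be sketched in plain Python. Following the original EmbraceNet design, each modality's features are first "docked" (linearly projected) to a common dimensionality, and the fused vector then takes each of its components from one modality chosen at random according to per-modality probabilities. The function names and toy dimensions below are illustrative, not taken from the paper:

```python
import random

def dock(features, weights, bias):
    """Project one modality's feature vector to the shared embedding size
    with a single linear layer (weights: d rows of length n, bias: length d)."""
    return [sum(w * x for w, x in zip(row, features)) + b
            for row, b in zip(weights, bias)]

def embrace(docked, probs, rng=random):
    """For each embedding dimension, sample one modality index according to
    probs and copy that modality's value into the fused vector."""
    n_modalities = len(docked)
    fused = []
    for i in range(len(docked[0])):
        m = rng.choices(range(n_modalities), weights=probs)[0]
        fused.append(docked[m][i])
    return fused

# Toy example: two modalities docked to a 4-dimensional embedding.
a = dock([1.0, 2.0], [[1, 0], [0, 1], [1, 1], [0.5, 0.5]], [0, 0, 0, 0])
b = dock([3.0], [[1], [2], [3], [4]], [0, 0, 0, 0])
fused = embrace([a, b], probs=[0.5, 0.5])
```

Because the per-dimension modality choice is stochastic, the fused vector mixes components from all modalities, which is what makes this fusion robust to a missing or noisy sensor.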
Multimedia Semantic Integrity Assessment Using Joint Embedding Of Images And Text
Real world multimedia data is often composed of multiple modalities such as
an image or a video with associated text (e.g. captions, user comments, etc.)
and metadata. Such multimodal data packages are prone to manipulations, where a
subset of these modalities can be altered to misrepresent or repurpose data
packages, with possible malicious intent. It is, therefore, important to
develop methods to assess or verify the integrity of these multimedia packages.
Using computer vision and natural language processing methods to directly
compare the image (or video) and the associated caption to verify the integrity
of a media package is only possible for a limited set of objects and scenes. In
this paper, we present a novel deep learning-based approach for assessing the
semantic integrity of multimedia packages containing images and captions, using
a reference set of multimedia packages. We construct a joint embedding of
images and captions with deep multimodal representation learning on the
reference dataset in a framework that also provides image-caption consistency
scores (ICCSs). The integrity of query media packages is assessed as the
inlierness of the query ICCSs with respect to the reference dataset. We present
the MultimodAl Information Manipulation dataset (MAIM), a new dataset of media
packages from Flickr, which we make available to the research community. We use
both the newly created dataset as well as Flickr30K and MS COCO datasets to
quantitatively evaluate our proposed approach. The reference dataset does not
contain unmanipulated versions of tampered query packages. Our method is able
to achieve F1 scores of 0.75, 0.89 and 0.94 on MAIM, Flickr30K and MS COCO,
respectively, for detecting semantically incoherent media packages.

Comment: Ayush Jaiswal and Ekraam Sabir contributed equally to the work in
this paper.
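As a rough illustration of the inlierness idea (not the paper's exact scoring model), an ICCS can be computed as the cosine similarity between an image embedding and a caption embedding, and a query package can be flagged when its ICCS is unusually low relative to the reference set. The percentile-rank scoring below is a simplified stand-in for whatever outlier detector is actually trained:

```python
import math

def iccs(img_emb, cap_emb):
    """Image-caption consistency score as cosine similarity of embeddings."""
    dot = sum(a * b for a, b in zip(img_emb, cap_emb))
    norm = (math.sqrt(sum(a * a for a in img_emb))
            * math.sqrt(sum(b * b for b in cap_emb)))
    return dot / norm

def inlierness(query_score, reference_scores):
    """Percentile rank of the query ICCS within the reference scores;
    values near 0 suggest a semantically incoherent (tampered) package."""
    return sum(s <= query_score for s in reference_scores) / len(reference_scores)

consistent = iccs([1.0, 0.0], [0.9, 0.1])  # near-parallel embeddings
tampered = iccs([1.0, 0.0], [0.0, 1.0])    # orthogonal embeddings
```

The key point the abstract makes is preserved here: no unmanipulated version of the query is needed, only a reference distribution of consistency scores against which the query's score is ranked.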
QuesNet: A Unified Representation for Heterogeneous Test Questions
Understanding learning materials (e.g. test questions) is a crucial issue in
online learning systems and can benefit many applications in the education
domain. Unfortunately, many supervised approaches suffer from scarce
human-labeled data, while abundant unlabeled resources remain highly
underutilized. To alleviate this problem, an effective solution is to use
pre-trained representations for question understanding. However, existing
pre-training methods in the NLP area are ill-suited to learning test question
representations due to several domain-specific characteristics of education.
First, questions usually comprise heterogeneous data including content text,
images, and side information. Second, questions contain both basic linguistic
information and domain-specific logic and knowledge. To this end, in this paper,
we propose a novel pre-training method, namely QuesNet, for comprehensively
learning question representations. Specifically, we first design a unified
framework to aggregate question information with its heterogeneous inputs into
a comprehensive vector. Then we propose a two-level hierarchical pre-training
algorithm to learn a better understanding of test questions in an unsupervised
way. Here, a novel holed language model objective is developed to extract
low-level linguistic features, and a domain-oriented objective is proposed to
learn high-level logic and knowledge. Moreover, we show that QuesNet can be
effectively fine-tuned for many question-based tasks. We conduct extensive
experiments on large-scale real-world question data, where the experimental
results clearly demonstrate the effectiveness of QuesNet for question
understanding as well as its broad applicability.
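The holed language model objective can be illustrated with a small data-preparation helper: a token sequence is turned into training triples in which the model must predict a held-out "hole" from context on both sides, unlike a left-to-right language model. This sketch only prepares the training pairs; QuesNet's actual prediction network is not reproduced here:

```python
def holed_lm_triples(tokens, hole_size=1):
    """Yield (left_context, hole, right_context) triples; a holed LM is
    trained to predict the hole from BOTH surrounding contexts."""
    triples = []
    for i in range(len(tokens) - hole_size + 1):
        left = tokens[:i]
        hole = tokens[i:i + hole_size]
        right = tokens[i + hole_size:]
        triples.append((left, hole, right))
    return triples

# Every token position becomes one low-level pre-training example.
triples = holed_lm_triples(["what", "is", "2", "+", "2"])
```

Conditioning on both sides is what lets this objective capture the low-level linguistic features the abstract mentions, while the separate domain-oriented objective targets the high-level logic and knowledge.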
Harnessing AI for Speech Reconstruction using Multi-view Silent Video Feed
Speechreading, or lipreading, is the technique of understanding and extracting
phonetic features from a speaker's visual cues, such as the movement of the
lips, face, teeth, and tongue. It has a wide range of multimedia applications,
such as in surveillance, Internet telephony, and as an aid for people with
hearing impairments. However, most of the work in speechreading has been limited to
text generation from silent videos. Recently, research has started venturing
into generating (audio) speech from silent video sequences but there have been
no developments thus far in dealing with divergent views and poses of a
speaker. Thus, although multiple camera feeds of a speaking user are often
available, they have not been exploited to deal with the different poses. To
this end, this paper presents the world's first multi-view speechreading and
reconstruction system. This work pushes the boundaries of multimedia research
by putting forth a model that leverages silent video feeds from multiple
cameras recording the same subject to generate intelligible speech for a
speaker. Initial results confirm the usefulness of
exploiting multiple camera views in building an efficient speech reading and
reconstruction system. It further shows the optimal placement of cameras which
would lead to the maximum intelligibility of speech. Next, it lays out various
innovative applications for the proposed system, focusing on its potentially
prodigious impact not just in the security arena but in many other multimedia
analytics problems.

Comment: 2018 ACM Multimedia Conference (MM '18), October 22--26, 2018, Seoul,
Republic of Korea.
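A minimal sketch of the multi-view idea, assuming a simple late-fusion design (the abstract does not specify the architecture): features extracted independently from each camera view are combined by weighted averaging into a single pose-robust representation before speech reconstruction. The fusion weights, which could encode camera-placement quality, are placeholders:

```python
def fuse_views(view_features, weights=None):
    """Late fusion: combine per-camera feature vectors by (optionally
    weighted) averaging; uniform weights are used when none are given."""
    n = len(view_features)
    if weights is None:
        weights = [1.0 / n] * n
    dim = len(view_features[0])
    return [sum(w * v[i] for w, v in zip(weights, view_features))
            for i in range(dim)]

# Two camera views (e.g. frontal and profile) with 3-D toy features.
fused = fuse_views([[1.0, 2.0, 3.0], [3.0, 4.0, 5.0]])
```

Non-uniform weights would let the system favor views found to yield higher speech intelligibility, in line with the camera-placement analysis the abstract describes.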